Hyperdimensional computing (HDC) is a paradigm for data representation and learning that originated in computational neuroscience. HDC represents data as high-dimensional, low-precision vectors, which can be used for a variety of information-processing tasks such as learning and recall. The mapping of data into high-dimensional space is a fundamental problem in HDC, and existing methods encounter scalability issues when the input data are themselves high-dimensional. In this work, we explore a family of hashing-based streaming encoding techniques. We formally show that these methods offer comparable performance guarantees for learning applications while being substantially more efficient than existing alternatives. We validate these results experimentally on a popular high-dimensional classification problem and show that our approach easily scales to very large datasets.
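The abstract leaves the encoder itself abstract, so below is a minimal Python sketch of one hashing-based streaming encoder in the spirit it describes; the specific hash construction, the bipolar quantization, and the name `hash_encode` are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def hash_encode(x, dim=10_000, seed=0):
    """Sketch: hash each input coordinate to a position and a sign in a
    dim-dimensional bipolar hypervector, accumulating in one streaming
    pass so no dim-by-n codebook is ever materialized."""
    hv = np.zeros(dim)
    for i, v in enumerate(x):
        h = hash((seed, i))                   # stand-in for a streaming hash
        sign = 1.0 if (h >> 16) & 1 == 0 else -1.0
        hv[h % dim] += sign * v
    return np.sign(hv)                        # quantize to low precision (+/-1)

# Encode a high-dimensional input without storing a random projection matrix.
hv = hash_encode(np.random.randn(100_000))
```

Because the codebook is implicit in the hash, memory stays O(dim) regardless of the input dimensionality, which is the scalability property the abstract emphasizes.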
Neural architecture search (NAS) has significantly improved productivity in the design and deployment of neural networks (NNs). Since NAS typically evaluates multiple models by partially or fully training them, this improved productivity comes at the cost of a considerable carbon footprint. To alleviate this expensive training routine, zero-shot/zero-cost proxies analyze an NN at initialization to produce a score that is highly correlated with its true accuracy. Zero-cost proxies are currently designed by experts who run numerous empirical tests over possible algorithms, datasets, and neural architecture design spaces. This reduces productivity and is an unsustainable approach to zero-cost proxy design, since deep learning use cases are inherently diverse. Moreover, existing zero-cost proxies fail to generalize across neural architecture design spaces. In this paper, we propose a genetic programming framework to automate the discovery of zero-cost proxies for neural architecture scoring. Our method efficiently discovers an interpretable and generalizable zero-cost proxy that achieves the highest score-accuracy correlation across all datasets and search spaces of NASBench-201 and Network Design Spaces (NDS). We believe this research indicates a promising direction toward automatically discovering zero-cost proxies that work across network architecture design spaces, datasets, and tasks.
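As a rough illustration of what such a genetic-programming search loop could look like, here is a toy sketch; the primitive set, the program representation, and the per-network statistics are invented for illustration and are not the paper's actual search space:

```python
import random
import numpy as np
from scipy.stats import spearmanr

# Illustrative primitives a candidate proxy may compose (not the paper's set).
PRIMITIVES = {
    "l1":   lambda s: np.abs(s).mean(),
    "var":  lambda s: s.var(),
    "frob": lambda s: np.linalg.norm(s),
}

def random_proxy():
    """A tiny 'program': one primitive applied to each network statistic."""
    return {stat: random.choice(list(PRIMITIVES)) for stat in ("weights", "grads")}

def fitness(proxy, stats_per_net, accuracies):
    """Fitness = rank correlation between proxy scores and true accuracies."""
    scores = [sum(PRIMITIVES[op](s[stat]) for stat, op in proxy.items())
              for s in stats_per_net]
    return spearmanr(scores, accuracies).correlation

# Toy search over random programs, scored against hypothetical statistics.
nets = [{"weights": np.random.randn(256), "grads": np.random.randn(256)}
        for _ in range(20)]
accs = np.random.rand(20)
best = max((random_proxy() for _ in range(100)),
           key=lambda p: fitness(p, nets, accs))
```

A real framework would evolve expression trees with crossover and mutation, and measure fitness against benchmark accuracies such as NASBench-201 rather than random values.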
Extreme classification (XC) seeks to tag data points with the most relevant subset of labels from an extremely large label set. Deep XC, which learns dense representations of data points and labels, has attracted much attention as it supersedes XC methods that use sparse, hand-crafted features. Negative mining techniques have emerged as a critical component of all deep XC methods, allowing them to scale to millions of labels. However, despite recent advances, training deep XC models with large encoder architectures such as transformers remains challenging. This paper identifies that the memory overheads of popular negative mining techniques often force mini-batch sizes to remain small and slow down training. In response, this paper introduces NGAME, a lightweight mini-batch creation technique that provides provably accurate in-batch negative samples. This allows training with larger mini-batches than existing negative sampling techniques, offering faster convergence and higher accuracy. NGAME was found to be up to 16% more accurate than state-of-the-art methods on a wide array of benchmark datasets for extreme classification, and 3% more accurate at retrieving search engine queries in response to a user's visited webpage for showing personalized ads. In live A/B tests on a popular search engine, NGAME yielded gains of up to 23% in click-through rates.
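A minimal sketch of the core idea behind negative-mining-aware mini-batching, assuming a cluster-then-batch construction; the clustering choice and function names here are illustrative, not NGAME's exact algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_minibatches(query_embs, batch_size, n_clusters, seed=0):
    """Sketch: group similar queries into the same mini-batch so in-batch
    negatives (the other items' labels) are hard and informative, instead
    of the mostly-easy negatives random batching produces."""
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=seed).fit_predict(query_embs)
    rng = np.random.default_rng(seed)
    for c in range(n_clusters):
        idx = np.where(cluster_ids == c)[0]
        rng.shuffle(idx)
        for start in range(0, len(idx), batch_size):
            yield idx[start:start + batch_size]

# Each batch now contains semantically close queries, so treating other
# in-batch labels as negatives approximates explicit hard-negative mining.
batches = list(clustered_minibatches(np.random.randn(1000, 64), 32, 10))
```

The memory saving follows because no per-instance shortlist of mined negatives has to be stored alongside the batch.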
Recent neural architecture search (NAS) solutions have produced impressive results by training super-networks and then deriving subnetworks, a.k.a. child models, that outperform expert-crafted models from a predefined search space. Efficient and robust subnetworks can be selected for resource-constrained edge devices, allowing them to perform well in the wild. However, constructing super-networks for arbitrary architectures remains a challenge that often prevents the adoption of these approaches. To address this challenge, we present BootstrapNAS, a software framework for the automatic generation of super-networks for NAS. BootstrapNAS takes a pre-trained model from a popular architecture, e.g., ResNet-50, or from a valid custom design, and automatically creates a super-network from it; it then uses state-of-the-art NAS techniques to train the super-network, resulting in subnetworks that significantly outperform the given pre-trained model. We demonstrate the solution by generating super-networks from arbitrary model repositories, and we make the resulting super-networks available for reproducibility of the results.
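To make the super-network idea concrete, here is a hedged PyTorch sketch of one elastic building block of the kind such a framework could substitute into a pre-trained model; this illustrates weight-sharing in general and is not BootstrapNAS's actual API:

```python
import torch
import torch.nn as nn

class ElasticLinear(nn.Module):
    """Sketch: a linear layer whose active output width can be sub-sampled,
    so one set of shared (pre-trained) weights serves many child models."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.active_out = out_features       # mutated by the search controller

    def forward(self, x):
        # Only the first `active_out` rows participate in this child model.
        return x @ self.weight[: self.active_out].t() + self.bias[: self.active_out]

layer = ElasticLinear(64, 128)
layer.active_out = 96                        # sample a narrower subnetwork
y = layer(torch.randn(4, 64))                # -> shape (4, 96)
```

Automatic super-network generation then amounts to replacing eligible layers of the pre-trained model with elastic counterparts initialized from the original weights.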
Multiple studies have focused on predicting the prospective popularity of an online document as a whole, without paying attention to the contributions of its individual parts. We introduce the task of proactively forecasting popularities of sentences within online news documents solely utilizing their natural language content. We model sentence-specific popularity forecasting as a sequence regression task. For training our models, we curate InfoPop, the first dataset containing popularity labels for over 1.7 million sentences from over 50,000 online news documents. To the best of our knowledge, this is the first dataset automatically created using streams of incoming search engine queries to generate sentence-level popularity annotations. We propose a novel transfer learning approach involving sentence salience prediction as an auxiliary task. Our proposed technique coupled with a BERT-based neural model exceeds nDCG values of 0.8 for proactive sentence-specific popularity forecasting. Notably, our study presents a non-trivial takeaway: though popularity and salience are different concepts, transfer learning from salience prediction enhances popularity forecasting. We release InfoPop and make our code publicly available: https://github.com/sayarghoshroy/InfoPopularity
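A hedged sketch of the kind of BERT-based sequence-regression model the abstract describes; the [CLS] pooling and the single-layer head are assumptions, not necessarily the paper's exact architecture:

```python
import torch.nn as nn
from transformers import AutoModel

class SentencePopularityRegressor(nn.Module):
    """Sketch: encode each sentence with BERT and regress one popularity
    score per sentence from its [CLS] representation."""
    def __init__(self, backbone="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0]).squeeze(-1)
```

Transfer from salience prediction would then amount to first training this encoder on salience labels before fine-tuning on popularity, which is one plausible reading of the auxiliary-task setup.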
Speech systems are sensitive to accent variations. This is especially challenging in the Indian context, with an abundance of languages but a dearth of linguistic studies characterising pronunciation variations. The growing number of L2 English speakers in India reinforces the need to study accents and L1-L2 interactions. We investigate the accents of Indian English (IE) speakers and report in detail our observations, both specific and common to all regions. In particular, we observe the phonemic variations and phonotactics occurring in the speakers' native languages and apply this to their English pronunciations. We demonstrate the influence of 18 Indian languages on IE by comparing the native language pronunciations with IE pronunciations obtained jointly from existing literature studies and phonetically annotated speech of 80 speakers. Consequently, we are able to validate the intuitions of Indian language influences on IE pronunciations by justifying pronunciation rules from the perspective of Indian language phonology. We obtain a comprehensive description in terms of universal and region-specific characteristics of IE, which facilitates accent conversion and adaptation of existing ASR and TTS systems to different Indian accents.
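As a toy illustration of how such pronunciation rules can be represented and applied downstream (e.g., for accent conversion or lexicon adaptation): the two substitutions below, a /v/-/w/ merger and retroflexion of alveolar stops, are commonly cited IE features used here purely as examples, not the paper's derived rule set:

```python
# Context-free phone substitutions of the kind an accent-adaptation
# pipeline could apply to a reference pronunciation.
IE_RULES = {
    "v": "ʋ", "w": "ʋ",   # example: /v/-/w/ merger to a labiodental approximant
    "t": "ʈ", "d": "ɖ",   # example: alveolar stops realized as retroflex
}

def apply_accent_rules(phones, rules=IE_RULES):
    return [rules.get(p, p) for p in phones]

print(apply_accent_rules(["w", "ɪ", "n", "d", "oʊ"]))  # ['ʋ', 'ɪ', 'n', 'ɖ', 'oʊ']
```

Region-specific rule tables of this shape are one way the study's universal versus regional characterization could feed directly into ASR lexicons or TTS front ends.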
In molecular research, the simulation and design of molecules are key areas with significant implications for drug development, materials science, and other fields. Current classical computational power is inadequate for simulating anything beyond small molecules, let alone protein chains of hundreds of peptides. These experiments are therefore carried out physically in wet labs, which takes a great deal of time and, given the size of the search space, makes it impossible to examine every molecule; tens of billions of dollars are spent on such experiments every year. Molecular simulation and design have lately advanced significantly through machine learning models: deep generative models for graph-structured data provide a fresh perspective on the problem of chemical synthesis. By optimizing differentiable models that produce molecular graphs directly, it is feasible to avoid costly search techniques in the discrete and huge space of chemical structures. However, these models also suffer from computational limitations as dimensions grow, consuming large amounts of resources. In recent years, quantum generative machine learning has shown empirical results promising significant advantages over its classical counterparts.
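To ground the point about differentiable models that produce molecular graphs directly, here is a hedged MolGAN-style sketch; the sizes, layer choices, and class name are illustrative assumptions, not a specific published architecture:

```python
import torch
import torch.nn as nn

class TinyGraphGenerator(nn.Module):
    """Sketch: map a latent vector directly to atom-type and bond-type
    probabilities, so generation is differentiable end to end and avoids
    discrete search over chemical space."""
    def __init__(self, z_dim=32, n_atoms=9, n_atom_types=5, n_bond_types=4):
        super().__init__()
        self.n, self.a, self.b = n_atoms, n_atom_types, n_bond_types
        self.atom_head = nn.Linear(z_dim, n_atoms * n_atom_types)
        self.bond_head = nn.Linear(z_dim, n_atoms * n_atoms * n_bond_types)

    def forward(self, z):
        atoms = self.atom_head(z).view(-1, self.n, self.a).softmax(-1)
        bonds = self.bond_head(z).view(-1, self.n, self.n, self.b).softmax(-1)
        return atoms, bonds

atoms, bonds = TinyGraphGenerator()(torch.randn(2, 32))  # (2, 9, 5), (2, 9, 9, 4)
```

A real generator would additionally symmetrize the bond tensor and train with a VAE or GAN objective; the quantum variants the paragraph alludes to replace parts of this pipeline with parameterized quantum circuits.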
Developing and least developed countries face the dire challenge of ensuring that each child in their country receives the required doses of vaccination, adequate nutrition and proper medication. International agencies such as UNICEF, WHO and WFP, among other organizations, strive to find innovative solutions to determine which child has received the benefits and which has not. Biometric recognition systems have been sought out to help solve this problem. To that end, this report establishes a baseline accuracy of a commercial contactless palmprint recognition system that may be deployed for recognizing children in the age group of one to five years old. On a database of contactless palmprint images of one thousand unique palms from 500 children, we establish SOTA authentication accuracy of 90.85% @ FAR of 0.01%, rank-1 identification accuracy of 99.0% (closed set), and FPIR=0.01 @ FNIR=0.3 for open-set identification using PalmMobile SDK from Armatura.
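For readers unfamiliar with the reported operating points, here is a small sketch of how an authentication figure like "90.85% @ FAR of 0.01%" is computed from genuine and impostor match scores; the score distributions below are toy data, not the report's:

```python
import numpy as np

def tar_at_far(genuine, impostor, far=1e-4):
    """Pick the score threshold at which impostors are falsely accepted at
    rate `far`, then report the fraction of genuine pairs accepted (TAR)."""
    thr = np.quantile(impostor, 1.0 - far)
    return (np.asarray(genuine) >= thr).mean()

genuine = np.random.normal(2.0, 1.0, 10_000)    # toy genuine-pair scores
impostor = np.random.normal(0.0, 1.0, 100_000)  # toy impostor-pair scores
print(tar_at_far(genuine, impostor, far=1e-4))  # TAR @ 0.01% FAR
```

The open-set numbers (FPIR/FNIR) are the analogous trade-off for identification against a gallery rather than one-to-one verification.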
Selective classification involves identifying the subset of test samples that a model can classify with high accuracy, and is important for applications such as automated medical diagnosis. We argue that this capability of identifying uncertain samples is valuable for training classifiers as well, with the aim of building more accurate classifiers. We unify these dual roles by training a single auxiliary meta-network to output an importance weight as a function of the instance. This measure is used at train time to reweight training data, and at test time to rank test instances for selective classification. A second, key component of our proposal is the meta-objective of minimizing dropout variance (the variance of classifier output when subjected to random weight dropout) for training the meta-network. We train the classifier together with its meta-network using a nested objective of minimizing classifier loss on training data and meta-loss on a separate meta-training dataset. We outperform the current state-of-the-art on selective classification by substantial margins; for instance, up to 1.9% AUC and 2% accuracy on a real-world diabetic retinopathy dataset. Finally, our meta-learning framework extends naturally to unsupervised domain adaptation, given our unsupervised variance-minimization meta-objective. We show cumulative absolute gains of 3.4% / 3.3% in accuracy and AUC over the other baselines in domain-shift settings on the Retinopathy dataset using unsupervised domain adaptation.
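A minimal sketch of the dropout-variance signal described above, assuming a standard PyTorch classifier with dropout layers; the function name and sampling count are illustrative:

```python
import torch

def dropout_variance(model, x, n_samples=8):
    """Sketch: run the classifier several times with dropout active and
    return the per-instance variance of its softmax outputs; higher
    variance marks the uncertain instances to down-weight or abstain on."""
    model.train()                                # keep dropout stochastic
    with torch.no_grad():
        outs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return outs.var(dim=0).sum(dim=-1)           # one scalar per instance

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Dropout(0.5), torch.nn.Linear(32, 3))
scores = dropout_variance(model, torch.randn(16, 10))   # shape (16,)
```

In the nested setup the abstract describes, a signal of this kind would serve as the meta-loss for the meta-network, while the main classifier trains on the reweighted data.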
Many real-world learning scenarios face the challenge of slow concept drift, where data distributions change gradually over time. In this setting, we pose the problem of learning temporally sensitive importance weights for training data, in order to optimize predictive accuracy. We propose a class of temporal reweighting functions that can capture multiple timescales of change in the data, as well as instance-specific characteristics. We formulate a bi-level optimization criterion, and an associated meta-learning algorithm, by which these weights can be learned. In particular, our formulation trains an auxiliary network to output weights as a function of training instances, thereby compactly representing the instance weights. We validate our temporal reweighting scheme on a large real-world dataset of 39M images spread over a 9-year period. Our extensive experiments demonstrate the necessity of instance-based temporal reweighting on this dataset, and achieve significant improvements over classical batch-learning approaches. Further, our proposal generalizes easily to a streaming setting and shows significant gains over recent continual learning methods.
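One plausible shape for the auxiliary weighting network just described, sketched under the assumption that it mixes a few learnable decay timescales per instance; the architecture and names are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

class TemporalWeightNet(nn.Module):
    """Sketch: map an instance's age (time since it was observed) and its
    features to a non-negative importance weight, as an instance-specific
    mixture of several learnable exponential-decay timescales."""
    def __init__(self, feat_dim, n_timescales=3):
        super().__init__()
        self.log_tau = nn.Parameter(torch.zeros(n_timescales))  # decay scales
        self.mix = nn.Linear(feat_dim, n_timescales)            # per-instance mixing

    def forward(self, age, feats):
        decays = torch.exp(-age[:, None] / self.log_tau.exp())  # (B, K)
        mix = self.mix(feats).softmax(-1)                       # (B, K)
        return (mix * decays).sum(-1)                           # weight per instance

net = TemporalWeightNet(feat_dim=8)
w = net(torch.rand(5), torch.randn(5, 8))   # one training weight per instance
```

The bi-level criterion would then update `log_tau` and `mix` against a held-out meta-loss while the main model trains on the reweighted instances.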